-
Modern climate projections lack adequate spatial and temporal resolution due to computational constraints. A consequence is inaccurate and imprecise predictions of critical processes such as storms. Hybrid methods that combine physics with machine learning (ML) have introduced a new generation of higher-fidelity climate simulators that can sidestep Moore's Law by outsourcing compute-hungry, short, high-resolution simulations to ML emulators. However, this hybrid ML-physics simulation approach requires domain-specific treatment and has been inaccessible to ML experts because of a lack of training data and relevant, easy-to-use workflows. We present ClimSim, the largest-ever dataset designed for hybrid ML-physics research. It comprises multi-scale climate simulations developed by a consortium of climate scientists and ML researchers, and consists of 5.7 billion pairs of multivariate input and output vectors that isolate the influence of locally-nested, high-resolution, high-fidelity physics on a host climate simulator's macro-scale physical state. The dataset is global in coverage, spans multiple years at high sampling frequency, and is designed such that resulting emulators are compatible with downstream coupling into operational climate simulators. We implement a range of deterministic and stochastic regression baselines to highlight the ML challenges and their scoring. The data (https://huggingface.co/datasets/LEAP/ClimSim_high-res) and code (https://leap-stc.github.io/ClimSim) are released openly to support the development of hybrid ML-physics and high-fidelity climate simulations for the benefit of science and society.
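As a rough illustration of the regression task this dataset poses, the sketch below fits a small multilayer-perceptron baseline to input/output vector pairs of the kind ClimSim provides. The file names, array shapes, and training settings are assumptions for illustration only, not the dataset's documented interface.

```python
# Minimal baseline sketch: an MLP regression from ClimSim-style input vectors to
# output (tendency) vectors. Assumes the data have already been exported to NumPy
# arrays of shape (num_samples, num_input_features) and
# (num_samples, num_output_features); file names and sizes are hypothetical.
import numpy as np
import torch
import torch.nn as nn

x = np.load("climsim_inputs.npy").astype(np.float32)   # hypothetical export
y = np.load("climsim_outputs.npy").astype(np.float32)  # hypothetical export

model = nn.Sequential(
    nn.Linear(x.shape[1], 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, y.shape[1]),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.from_numpy(x), torch.from_numpy(y)),
    batch_size=1024, shuffle=True,
)
for epoch in range(3):  # a few passes, just to show the loop structure
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```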
-
A long-standing open problem in the theory of neural networks is the development of quantitative methods to estimate and compare the capabilities of different architectures. Here we define the capacity of an architecture as the binary logarithm of the number of functions it can compute as the synaptic weights are varied. The capacity provides an upper bound on the number of bits that can be extracted from the training data and stored in the architecture during learning. We study the capacity of layered, fully-connected architectures of linear threshold neurons with L layers and show that, in essence, the capacity is given by a cubic polynomial in the layer sizes. In proving the main result, we also develop new techniques (multiplexing, enrichment, and stacking) as well as new bounds on the capacity of finite sets. We use the main result to identify architectures with maximal or minimal capacity under a number of natural constraints. This leads to the notion of structural regularization for deep architectures. While in general, everything else being equal, shallow networks compute more functions than deep networks, the functions computed by deep networks are more regular and "interesting".
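To make the definition concrete, the block below restates it in symbols together with the classical single-threshold-neuron counting result, quoted only as a familiar illustration of the definition; it is not the cubic formula proved in the paper.

```latex
% Capacity as defined above: the binary logarithm of the number of functions
% the architecture A can compute as its weights vary.
C(\mathcal{A}) \;=\; \log_2 \bigl|\{\, f : f \text{ is computable by } \mathcal{A} \,\}\bigr|
% Familiar illustration (classical counting of linear threshold functions,
% not this paper's result): a single threshold neuron with n binary inputs has
C(n) \;=\; n^2\,(1 + o(1)).
```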
-
Recently, Approximate Policy Iteration (API) algorithms have achieved superhuman proficiency in two-player zero-sum games such as Go, Chess, and Shogi without human data. These API algorithms iterate between two policies: a slow policy (tree search) and a fast policy (a neural network). In these two-player games, a reward is always received at the end of the game. However, the Rubik's Cube has only a single solved state, and episodes are not guaranteed to terminate. This poses a major problem for these API algorithms, since they rely on the reward received at the end of the game. We introduce Autodidactic Iteration: an API algorithm that overcomes the problem of sparse rewards by training on a distribution of states that allows the reward to propagate from the goal state to states farther away. Autodidactic Iteration is able to learn how to solve the Rubik's Cube without relying on human data. Our algorithm solves 100% of randomly scrambled cubes while achieving a median solve length of 30 moves, equal to or shorter than solvers that employ human domain knowledge.
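The sketch below is a schematic of the training-target construction described above: states are generated by scrambling away from the solved state, and each state's value target comes from a one-step lookahead over its children, so reward information propagates outward from the goal. The toy cube environment, network, and hyperparameters are placeholders, not the paper's implementation.

```python
# Schematic value-target construction: scramble backwards from the solved state,
# then set each state's target from a one-step lookahead over its children.
# The "cube" here is a stand-in environment; a real implementation would encode
# the Rubik's Cube state and its 12 face turns.
import random

import torch
import torch.nn as nn

class ToyCube:
    """Placeholder environment with a single goal state (state == 0)."""
    MOVES = list(range(12))
    def __init__(self, state=0): self.state = state
    def is_solved(self): return self.state == 0
    def apply(self, move): return ToyCube((self.state * 13 + move + 1) % 10_007)
    def encode(self): return torch.tensor([float(self.state)])

value_net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(value_net.parameters(), lr=1e-3)

def scramble(depth):
    cube = ToyCube()
    for _ in range(depth):
        cube = cube.apply(random.choice(ToyCube.MOVES))
    return cube

for step in range(100):
    cube = scramble(random.randint(1, 20))      # states near and far from the goal
    targets = []
    for move in ToyCube.MOVES:                  # one-step lookahead over children
        child = cube.apply(move)
        reward = 1.0 if child.is_solved() else -1.0
        with torch.no_grad():
            targets.append(reward + value_net(child.encode()).item())
    y = torch.tensor([max(targets)])            # best achievable from this state
    opt.zero_grad()
    loss = nn.functional.mse_loss(value_net(cube.encode()), y)
    loss.backward()
    opt.step()
```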
-
Double- and single-differential cross sections for inclusive charged-current νμ-nucleus scattering are reported for the kinematic domain 0 to 2 GeV/c in three-momentum transfer and 0 to 2 GeV in available energy, at a mean energy of 1.86 GeV. The measurements are based on an estimated 995,760 charged-current (CC) interactions in the scintillator medium of the NOvA Near Detector. The subdomain populated by 2-particle-2-hole (2p2h) reactions is identified by the cross-section excess relative to predictions for νμ-nucleus scattering that are constrained by a data control sample. Models for 2-particle-2-hole processes are rated by comparisons of the predicted-versus-measured CC inclusive cross section over the full phase space and in the restricted subdomain. Shortfalls are observed in neutrino generator predictions obtained using the theory-based València and SuSAv2 2p2h models. Published by the American Physical Society, 2025.
-
We report a search for neutrino oscillations to sterile neutrinos under a model with three active and one sterile neutrino (the 3+1 model). This analysis uses the NOvA detectors exposed to the NuMI beam running in neutrino mode. The data exposure in protons on target doubles that previously analyzed by NOvA, and the analysis is the first to use charged-current interactions in conjunction with neutral-current interactions. Neutrino samples in the near and far detectors are fitted simultaneously, enabling the search to be carried out over a Δm²₄₁ range spanning several orders of magnitude. NOvA finds no evidence for active-to-sterile neutrino oscillations under the 3+1 model at the 90% confidence level. New limits are reported in multiple regions of parameter space, excluding some regions currently allowed by IceCube at 90% confidence level. We additionally set the most stringent limits to date for anomalous appearance in part of this range. Published by the American Physical Society, 2025.
-
Measuring observables to constrain models using maximum-likelihood estimation is fundamental to many physics experiments. Wilks' theorem provides a simple way to construct confidence intervals on model parameters, but it only applies under certain conditions. These conditions, such as nested hypotheses and unbounded parameters, are often violated in neutrino oscillation measurements and other experimental scenarios. Monte Carlo methods can address these issues, albeit at increased computational cost. In the presence of nuisance parameters, however, the best way to implement a Monte Carlo method is ambiguous. This paper documents the method selected by the NOvA experiment, the profile construction. It presents the toy studies that informed the choice of method, details of its implementation, and tests performed to validate it. It also includes some practical considerations that may be of use to others choosing to use the profile construction.
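For readers unfamiliar with the procedure, the following is a schematic of a Monte Carlo profile construction as generally described in the statistics literature, not NOvA's implementation: at each tested value of the parameter of interest, pseudo-experiments are generated with the nuisance parameter fixed at its value profiled from the data, and the observed profile-likelihood-ratio statistic is compared with the resulting empirical critical value. The Gaussian toy model and all names are illustrative placeholders.

```python
# Schematic Monte Carlo "profile construction" on a Gaussian toy model.
# A full treatment would also fluctuate any constraint terms when generating
# pseudo-experiments; this sketch keeps them fixed for brevity.
import numpy as np
from scipy.optimize import minimize, minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate(mu, nuisance, n=30):
    # Toy experiment: n observations with mean mu + nuisance and unit variance.
    return rng.normal(mu + nuisance, 1.0, size=n)

def neg2logl(data, mu, nuisance):
    # Data likelihood plus a Gaussian penalty constraining the nuisance to
    # 0 +/- 0.5, so mu and the nuisance are not fully degenerate.
    return -2.0 * norm.logpdf(data, loc=mu + nuisance, scale=1.0).sum() + (nuisance / 0.5) ** 2

def profile_nuisance(data, mu):
    # Minimize over the nuisance parameter with mu held fixed.
    res = minimize_scalar(lambda s: neg2logl(data, mu, s))
    return res.x, res.fun

def test_statistic(data, mu):
    # Profile likelihood ratio: -2 ln [ L(mu, s profiled) / L(mu hat, s hat) ]
    _, constrained = profile_nuisance(data, mu)
    unconstrained = minimize(lambda p: neg2logl(data, p[0], p[1]), x0=[0.0, 0.0]).fun
    return constrained - unconstrained

observed = simulate(mu=0.7, nuisance=0.2)            # stands in for the real data
for mu_test in np.linspace(-1.0, 2.0, 7):            # grid over parameter of interest
    s_prof, _ = profile_nuisance(observed, mu_test)  # nuisance profiled on the data
    toys = [test_statistic(simulate(mu_test, s_prof), mu_test) for _ in range(200)]
    critical = np.quantile(toys, 0.90)               # empirical 90% critical value
    t_obs = test_statistic(observed, mu_test)
    verdict = "allowed" if t_obs <= critical else "excluded"
    print(f"mu = {mu_test:+.2f}  t_obs = {t_obs:5.2f}  t_crit(90%) = {critical:5.2f}  {verdict}")
```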
-
This Letter reports a search for charge-parity (CP) symmetry violating nonstandard interactions (NSI) of neutrinos with matter using the NOvA experiment, and examines their effects on the determination of the standard oscillation parameters. Data from the νμ disappearance and νe appearance oscillation channels are used to measure the effect of the NSI parameters ε_eμ and ε_eτ. At 90% CL, limits are placed on the magnitudes of both NSI couplings. A parameter degeneracy is reported, and we observe that the presence of NSI limits sensitivity to the standard CP-violating phase δCP. Published by the American Physical Society, 2024.
-
NOvA is a long-baseline neutrino oscillation experiment that measures oscillations in the charged-current νμ→νμ (disappearance) and νμ→νe (appearance) channels, and their antineutrino counterparts, using neutrinos of energies around 2 GeV over a distance of 810 km. In this work we reanalyze the dataset first examined in our previous paper using an alternative statistical approach based on Bayesian Markov chain Monte Carlo. We measure oscillation parameters consistent with the previous results. We also extend our inferences to include the first NOvA measurements of the reactor mixing angle θ₁₃ and of the Jarlskog invariant, for which we observe no significant preference for the CP-conserving value over values favoring CP violation. We use these results to examine the effects of constraints from short-baseline measurements of θ₁₃ using antineutrinos from nuclear reactors when making NOvA measurements of the other oscillation parameters. Our long-baseline measurement of θ₁₃ is shown to be consistent with the reactor measurements, supporting the general applicability and robustness of the Pontecorvo-Maki-Nakagawa-Sakata framework for neutrino oscillations. Published by the American Physical Society, 2024.
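For reference, the Jarlskog invariant mentioned above is the standard reparametrization-invariant measure of CP violation in the PMNS framework; in the usual parametrization it reads as below (a textbook definition quoted for context, not a result of the paper).

```latex
% Jarlskog invariant in the standard PMNS parametrization; J = 0 corresponds
% to the CP-conserving case.
J \;=\; \sin\theta_{12}\cos\theta_{12}\,
        \sin\theta_{23}\cos\theta_{23}\,
        \sin\theta_{13}\cos^{2}\theta_{13}\,
        \sin\delta_{\mathrm{CP}}
  \;=\; \tfrac{1}{8}\,\sin 2\theta_{12}\,\sin 2\theta_{23}\,\sin 2\theta_{13}\,
        \cos\theta_{13}\,\sin\delta_{\mathrm{CP}} .
```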
-
The Pandora Software Development Kit and algorithm libraries perform reconstruction of neutrino interactions in liquid argon time projection chamber detectors. Pandora is the primary event reconstruction software used at the Deep Underground Neutrino Experiment, which will operate four large-scale liquid argon time projection chambers at the far detector site in South Dakota, producing high-resolution images of charged particles emerging from neutrino interactions. While these high-resolution images provide excellent opportunities for physics, the complex topologies require sophisticated pattern recognition capabilities to interpret signals from the detectors as physically meaningful objects that form the inputs to physics analyses. A critical component is the identification of the neutrino interaction vertex. Subsequent reconstruction algorithms use this location to identify the individual primary particles and ensure they each result in a separate reconstructed particle. A new vertex-finding procedure described in this article integrates a U-ResNet neural network performing hit-level classification into the multi-algorithm approach used by Pandora to identify the neutrino interaction vertex. The machine learning solution is seamlessly integrated into a chain of pattern-recognition algorithms. The technique substantially outperforms the previous boosted-decision-tree (BDT) based solution, with a more than 20% increase in the efficiency of sub-1 cm vertex reconstruction across all neutrino flavours.
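As an illustration of hit-level classification of the kind described above, the sketch below runs a tiny U-Net-style network over a 2D image of detector hits and produces a per-pixel score for proximity to the interaction vertex. The architecture, channel counts, and labelling scheme are placeholders, not Pandora's U-ResNet.

```python
# Illustrative hit-level classifier: a 2D hit image in, a per-pixel vertex score out.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))  # per-pixel vertex score

    def forward(self, x):
        e1 = self.enc1(x)                        # full-resolution features
        e2 = self.enc2(self.down(e1))            # downsampled features
        u = self.up(e2)                          # back to full resolution
        return self.dec(torch.cat([u, e1], 1))   # skip connection, then score map

hits = torch.zeros(1, 1, 64, 64)                 # toy hit image (wire vs. drift time)
hits[0, 0, 30:34, 30:34] = 1.0                   # a small cluster of hits
scores = TinyUNet()(hits)                        # logits; highest near the vertex
print(scores.shape)                              # torch.Size([1, 1, 64, 64])
```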